
[PyOV] Allow creation of Tensors from pointers #27725

Draft
wants to merge 4 commits into master
Conversation

@p-wysocki (Contributor) commented Nov 25, 2024

Details:

  • Backstory: [Bug]: Inference fails if data is moved to GPU #25484
  • The original issue happens because a GPU Torch tensor is being passed to OpenVINO inference
  • The Python API always creates an np.array before passing the data to Tensor constructors
  • NumPy only supports CPU memory
  • Because of this, the data has to be copied to CPU memory before the np.array can be created
  • Because the model in the user's case runs on GPU, the data then has to be copied back, so the full chain is GPU -> CPU -> GPU (see the sketch below)
  • This causes a significant performance loss
  • The new constructor creates a Tensor directly from a pointer to GPU memory
  • Using it reduces the average inference time in the customer's script from 100 ms to 50 ms on my machine
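
For contrast, here is a rough sketch of the current (pre-PR) path; the actual conversion happens inside the Python API's data dispatcher, so the code below only illustrates the copies involved, not the exact implementation:

import torch
from openvino import Tensor

image = torch.rand(128, 3, 224, 224).to(torch.device("xpu"))
# np.array only understands host memory, so the data is copied GPU -> CPU here
host_copy = image.detach().cpu().numpy()
ov_tensor = Tensor(host_copy)
# at inference time the GPU plugin copies the data back: CPU -> GPU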

Example usage:

import torch
from openvino import Tensor, Shape

image = torch.rand(128, 3, 224, 224)
image = image.to(torch.device("xpu"), memory_format=torch.channels_last)
data_ptr = image.detach().data_ptr()
# pt_to_ov_type_map maps torch dtype strings (e.g. "torch.float32") to ov.Type
ov_tensor = Tensor(data_ptr, Shape(image.shape), pt_to_ov_type_map[str(image.dtype)])
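
For completeness, a hedged sketch of how the resulting tensor might then be passed to a GPU-compiled model; the model path and device name below are placeholders, not taken from the PR:

from openvino import Core

core = Core()
compiled_model = core.compile_model("model.xml", "GPU")  # placeholder model path
infer_request = compiled_model.create_infer_request()
infer_request.set_input_tensor(ov_tensor)  # per the PR description, this avoids the GPU -> CPU -> GPU round trip
infer_request.infer()
result = infer_request.get_output_tensor().data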

To be discussed

  • Since the new ctor takes a pointer rather than the array itself, the reference count of the array is not incremented (see the new test and the one above it). The ctor can't take the array as an argument and retrieve its pointer in the binding, because that would interfere with the other Tensor ctor overloads.
  • The reference count increment can be forced with a pure Python wrapper, but then the Tensor would need to be created with a separate util such as tensor_from_ptr() (see the sketch after this list).
  • We can't expand data_dispatcher.py because there already is a dispatch for an int
  • Is there another option?
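
A minimal sketch of what such a pure Python wrapper could look like; tensor_from_ptr() and _KEEPALIVE are hypothetical names and this is not the approach implemented in this PR:

from openvino import Tensor, Shape

_KEEPALIVE = {}  # tensor id -> source array; entries are never released in this sketch

def tensor_from_ptr(src, ov_type):
    # Build a Tensor from src's raw data pointer and keep a reference to src
    # so that its backing memory cannot be freed while the Tensor is in use.
    tensor = Tensor(src.detach().data_ptr(), Shape(src.shape), ov_type)
    _KEEPALIVE[id(tensor)] = src
    return tensor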

Tickets:

@akuporos added this to the 2025.0 milestone Nov 26, 2024
@@ -37,6 +37,38 @@ void regclass_Tensor(py::module m) {
:type shared_memory: bool
)");

cls.def(py::init([](int64_t data_ptr, const ov::Shape& shape, const ov::element::Type& ov_type) {
Contributor:

Can data_ptr be a void* here? If not, I suggest using uint64_t.

Contributor:

Same in line

@p-wysocki marked this pull request as draft November 26, 2024 11:10
Labels: category: Python API (OpenVINO Python bindings)